Results 1 - 20 of 32
1.
J Clin Med ; 13(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38731102

ABSTRACT

Background: The biomechanical analysis of spine and postural misalignments is important for surgical and non-surgical treatment of spinal pain. We investigated the examiner reliability of sagittal cervical alignment variables compared to the reliability and concurrent validity of computer vision algorithms used in the PostureRay® software 2024. Methods: A retrospective database of 254 lateral cervical radiographs of patients between the ages of 11 and 86 was studied. The radiographs include clearly visualized C1-C7 vertebrae that were evaluated by a human using the software. To evaluate examiner reliability and the concurrent validity of the trained CNN performance, two blinded trials of radiographic digitization were performed by an extensively trained expert user (US) clinician with a two-week interval between trials. The same clinician then used the trained CNN twice, again with a two-week interval, to reproduce the same measures on the same 254 radiographs. Measured variables included segmental angles as relative rotation angles (RRA) C1-C7, Cobb angles C2-C7, relative segmental translations (RT) C1-C7, anterior translation C2-C7, and absolute rotation angle (ARA) C2-C7. Data were remotely extracted from the examiner's PostureRay® system and sorted based on gender and stratification of degenerative changes. Reliability was assessed via intra-class correlations (ICC), root mean squared error (RMSE), and R2 values. Results: In comparing repeated measures of the CNN network to itself, perfect reliability was found for the ICC (1.0), RMSE (0), and R2 (1). The reliability of the trained expert US was in the excellent range for all variables, where 12/18 variables had ICCs ≥ 0.9 and 6/18 variables were 0.84 ≤ ICCs ≤ 0.89. Similarly, for the expert US, all R2 values were in the excellent range (R2 ≥ 0.7), and all RMSEs were small, being 0.42 ≤ RMSEs ≤ 3.27.
Construct validity between the expert US and the CNN network was in the excellent range, with 18/18 ICCs in the excellent range (ICCs ≥ 0.8), 16/18 R2 values in the strong to excellent range (R2 ≥ 0.7), and 2/18 in the good to moderate range (R2 RT C6/C7 = 0.57 and R2 Cobb C6/C7 = 0.64). The RMSEs for the expert US vs. the CNN network were small, being 0.37 ≤ RMSEs ≤ 2.89. Conclusions: Repeated measures within the computer vision CNN network and by the expert human examiner showed exceptional reliability, and comparison of the CNN to the human observer showed excellent construct validity.
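The reliability statistics reported above (ICC, RMSE, R2) can be computed from two sets of repeated measurements. Below is a minimal numpy sketch, not code from the PostureRay® software, using one common two-rater formulation, ICC(3,1); the function names and sample values are illustrative only:

```python
import numpy as np

def rmse(a, b):
    """Root mean squared error between two sets of measurements."""
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def r_squared(a, b):
    """Coefficient of determination, treating `a` as the reference."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    ss_res = np.sum((a - b) ** 2)
    ss_tot = np.sum((a - a.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def icc_3_1(a, b):
    """Two-rater ICC(3,1): two-way mixed effects, consistency, single measures."""
    x = np.column_stack([a, b]).astype(float)   # n subjects x k raters
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # between-subject MS
    ss_e = np.sum((x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))                            # residual MS
    return float((ms_r - ms_e) / (ms_r + (k - 1) * ms_e))
```

With two identical measurement trials these functions return ICC = 1, RMSE = 0, and R2 = 1, matching the "perfect reliability" baseline quoted for the CNN compared against itself.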

2.
IEEE Trans Affect Comput ; 14(3): 2020-2032, 2023.
Article in English | MEDLINE | ID: mdl-37840968

ABSTRACT

This paper presents our recent research on integrating artificial emotional intelligence in a social robot (Ryan) and studies the robot's effectiveness in engaging older adults. Ryan is a socially assistive robot designed to provide companionship for older adults with depression and dementia through conversation. We used two versions of Ryan in our study, empathic and non-empathic. The empathic Ryan utilizes a multimodal emotion recognition algorithm and a multimodal emotion expression system. Using different input modalities for emotion, i.e., facial expression and speech sentiment, the empathic Ryan detects the user's emotional state and utilizes an affective dialogue manager to generate a response. The non-empathic Ryan, by contrast, lacks facial expression and uses scripted dialogues that do not factor in the user's emotional state. We studied these two versions of Ryan with 10 older adults living in a senior care facility. The statistically significant improvement in the users' reported face-scale mood measurement indicates an overall positive effect from the interaction with both the empathic and non-empathic versions of Ryan. However, the number of spoken words and the exit survey analysis suggest that the users perceived the empathic Ryan as more engaging and likable.

3.
Semin Vasc Surg ; 36(3): 454-459, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37863620

ABSTRACT

Chronic limb-threatening ischemia (CLTI) is the most advanced form of peripheral artery disease. CLTI has an extremely poor prognosis and is associated with considerable risk of major amputation, cardiac morbidity, mortality, and poor quality of life. Early diagnosis and targeted treatment of CLTI are critical for improving patients' prognosis. However, this objective has proven elusive, time-consuming, and challenging due to existing health care disparities among patients. In this article, we review how artificial intelligence (AI) and machine learning (ML) can improve diagnostic accuracy and outcome prediction and identify disparities in the treatment of CLTI. We demonstrate the importance of AI/ML approaches for the management of these patients and show how available data could be used for computer-guided interventions. Although AI/ML applications to mitigate health care disparities in CLTI are in their infancy, we also highlight specific AI/ML methods that show potential for addressing these disparities.


Subjects
Chronic Limb-Threatening Ischemia, Peripheral Arterial Disease, Humans, Artificial Intelligence, Healthcare Disparities, Quality of Life, Treatment Outcome, Ischemia/diagnosis, Ischemia/therapy, Chronic Disease, Peripheral Arterial Disease/diagnosis, Peripheral Arterial Disease/therapy, Prognosis, Limb Salvage, Machine Learning, Risk Factors, Retrospective Studies
4.
Sensors (Basel) ; 23(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447628

ABSTRACT

Through wearable sensors and deep learning techniques, biomechanical analysis can reach beyond the lab for clinical and sporting applications. Transformers, a class of recent deep learning models, have become widely used in state-of-the-art artificial intelligence research due to their superior performance in various natural language processing and computer vision tasks. The performance of transformer models has not yet been investigated in biomechanics applications. In this study, we introduce a Biomechanical Multi-activity Transformer-based model, BioMAT, for the estimation of joint kinematics from streaming signals of multiple inertial measurement units (IMUs) using a publicly available dataset. This dataset includes IMU signals and the corresponding sagittal plane kinematics of the hip, knee, and ankle joints during multiple activities of daily living. We evaluated the model's performance and generalizability and compared it against a convolutional neural network-long short-term memory (CNN-LSTM) model, a bidirectional LSTM model, and multi-linear regression across different ambulation tasks including level ground walking (LW), ramp ascent (RA), ramp descent (RD), stair ascent (SA), and stair descent (SD). To investigate the effect of different activity datasets on prediction accuracy, we compared the performance of a universal model trained on all activities against task-specific models trained on individual tasks. When the models were tested on three unseen subjects' data, BioMAT outperformed the benchmark models with an average root mean square error (RMSE) of 5.5 ± 0.5° and normalized RMSE of 6.8 ± 0.3° across all three joints and all activities. A unified BioMAT model demonstrated superior performance compared to individual task-specific models across four of five activities.
The RMSE values from the universal model for the LW, RA, RD, SA, and SD activities were 5.0 ± 1.5°, 6.2 ± 1.1°, 5.8 ± 1.1°, 5.3 ± 1.6°, and 5.2 ± 0.7°, while those for the task-specific models were 5.3 ± 2.1°, 6.7 ± 2.0°, 6.9 ± 2.2°, 4.9 ± 1.4°, and 5.6 ± 1.3°, respectively. Overall, BioMAT estimated joint kinematics more accurately than previous machine learning algorithms across different activities, working directly from the sequence of IMU signals instead of time-normalized gait cycle data.
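Estimating kinematics from streaming IMU signals, as BioMAT does, requires slicing the continuous multichannel stream into fixed-length windows before feeding it to a sequence model. The numpy sketch below shows this generic windowing step only; the window and hop sizes are hypothetical, not settings from the paper:

```python
import numpy as np

def sliding_windows(stream, win, hop):
    """Slice a (T, channels) IMU stream into overlapping (win, channels) windows.

    Returns an array of shape (n_windows, win, channels); trailing samples
    that do not fill a whole window are dropped.
    """
    stream = np.asarray(stream)
    starts = range(0, stream.shape[0] - win + 1, hop)
    return np.stack([stream[s:s + win] for s in starts])
```

For example, a 100-sample, 6-channel stream with a 20-sample window and 10-sample hop yields 9 overlapping windows of shape (20, 6).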


Subjects
Activities of Daily Living, Wearable Electronic Devices, Humans, Biomechanical Phenomena, Artificial Intelligence, Walking, Gait, Knee Joint
5.
PLoS One ; 17(10): e0275281, 2022.
Article in English | MEDLINE | ID: mdl-36301975

ABSTRACT

The study of gaze perception has largely focused on a single cue (the eyes) in two-dimensional settings. While this literature suggests that 2D gaze perception is shaped by atypical development, as in Autism Spectrum Disorder (ASD), gaze perception is in reality contextually-sensitive, perceived as an emergent feature conveyed by the rotation of the pupils and head. We examined gaze perception in this integrative context, across development, among children and adolescents developing typically or with ASD with both 2D and 3D stimuli. We found that both groups utilized head and pupil rotations to judge gaze on a 2D face. But when evaluating the gaze of a physically-present, 3D robot, the same ASD observers used eye cues less than their typically-developing peers. This demonstrates that emergent gaze perception is a slowly developing process that is surprisingly intact, albeit weakened in ASD, and illustrates how new technology can bridge visual and clinical science.


Subjects
Autism Spectrum Disorder, Child, Adolescent, Humans, Ocular Fixation, Pupil, Cues (Psychology), Perception
6.
Front Robot AI ; 9: 965369, 2022.
Article in English | MEDLINE | ID: mdl-35880215

ABSTRACT

[This corrects the article DOI: 10.3389/frobt.2022.855819.].

7.
Front Robot AI ; 9: 855819, 2022.
Article in English | MEDLINE | ID: mdl-35677082

ABSTRACT

Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills including motor control, turn-taking, and emotion recognition. Innovative technology, such as socially assistive robots, has been shown to be a viable method for autism therapy. This paper presents a novel robot-based music-therapy platform for modeling and improving the social responses and behaviors of children with ASD. Our autonomous social interactive system consists of three modules. Module one provides an autonomous initiative positioning system for the robot, NAO, to properly localize and play the instrument (a xylophone) using the robot's arms. Module two allows NAO to play customized songs composed by individuals. Module three provides a real-life music therapy experience to the users. We adopted the short-time Fourier transform and Levenshtein distance to fulfill the design requirements: 1) "music detection" and 2) "smart scoring and feedback", which allow NAO to understand music and provide additional practice and oral feedback to the users as applicable. We designed and implemented six human-robot interaction (HRI) sessions, including four intervention sessions. Nine children with ASD and seven typically developing (TD) children participated in a total of fifty HRI experimental sessions. Using our platform, we collected and analyzed data on social behavioral changes and emotion recognition using electrodermal activity (EDA) signals. The results of our experiments demonstrate that most of the participants were able to complete motor control tasks with 70% accuracy. Six of the nine ASD participants showed stable turn-taking behavior when playing music. The results of automated emotion classification using support vector machines illustrate that emotional arousal in the ASD group can be detected and well recognized via EDA bio-signals.
In summary, the results of our data analyses, including emotion classification using EDA signals, indicate that the proposed robot-music based therapy platform is an attractive and promising assistive tool to facilitate the improvement of fine motor control and turn-taking skills in children with ASD.

8.
Sci Data ; 8(1): 66, 2021 02 24.
Article in English | MEDLINE | ID: mdl-33627669

ABSTRACT

In recent years, fingerprint-based positioning has gained researchers' attention since it is a promising alternative to Global Navigation Satellite System and cellular network-based localization in urban areas. Despite this, the lack of publicly available datasets that researchers can use to develop, evaluate, and compare fingerprint-based positioning solutions constitutes a high entry barrier for studies. As an effort to overcome this barrier and foster new research, this paper presents OutFin, a novel dataset of outdoor location fingerprints collected using two different smartphones. OutFin comprises diverse data types such as WiFi, Bluetooth, and cellular signal strengths, in addition to measurements from various sensors including the magnetometer, accelerometer, gyroscope, barometer, and ambient light sensor. The collection area spanned four dispersed sites with a total of 122 reference points. Each site differs in its visibility to the Global Navigation Satellite System and in the number, arrangement, and spacing of its reference points. Before OutFin was made available to the public, several experiments were conducted to validate its technical quality.
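A dataset like OutFin is typically used to develop and benchmark fingerprint-matching localizers. As an illustration of the general technique (not a method from this paper), here is a weighted k-nearest-neighbour position estimator in numpy; the reference-point values in the usage example are made up:

```python
import numpy as np

def knn_locate(db_fingerprints, db_positions, query, k=3):
    """Weighted k-nearest-neighbour position estimate from an RSS fingerprint.

    db_fingerprints: (n_refpoints, n_aps) signal strengths at known positions
    db_positions:    (n_refpoints, 2) reference-point coordinates
    query:           (n_aps,) fingerprint measured at the unknown position
    """
    d = np.linalg.norm(db_fingerprints - query, axis=1)   # signal-space distance
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)                         # closer -> larger weight
    return (w[:, None] * db_positions[nearest]).sum(axis=0) / w.sum()
```

Querying with a fingerprint identical to one stored at a reference point returns that point's coordinates; in-between fingerprints yield a distance-weighted blend of the k closest reference points.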

9.
Sensors (Basel) ; 20(19)2020 Sep 28.
Article in English | MEDLINE | ID: mdl-32998329

ABSTRACT

Quantitative assessments of patient movement quality in osteoarthritis (OA), specifically spatiotemporal gait parameters (STGPs), can provide in-depth insight into gait patterns, activity types, and changes in mobility after total knee arthroplasty (TKA). A study was conducted to benchmark the ability of multiple deep neural network (DNN) architectures to predict 12 STGPs from inertial measurement unit (IMU) data and to identify an optimal sensor combination, which has yet to be studied for OA and TKA subjects. DNNs were trained using movement data from 29 subjects walking at slow, normal, and fast paces and evaluated with cross-fold validation over the subjects. Optimal sensor locations were determined by comparing prediction accuracy across 15 IMU configurations (pelvis, thigh, shank, and feet). Percent error across the 12 STGPs ranged from 2.1% (stride time) to 73.7% (toe-out angle) and was overall more accurate for temporal parameters than for spatial parameters. The most and least accurate sensor combinations were feet-thighs and the single pelvis sensor, respectively. DNNs showed promising results in predicting STGPs for OA and TKA subjects based on signals from IMU sensors and overcome the dependency on sensor location that can hinder the design of patient monitoring systems for clinical application.


Subjects
Knee Arthroplasty, Deep Learning, Gait, Osteoarthritis, Humans, Osteoarthritis/physiopathology, Walking
10.
J Neurosci Methods ; 335: 108621, 2020 04 01.
Article in English | MEDLINE | ID: mdl-32027889

ABSTRACT

BACKGROUND: Recognition of human behavioral activities using local field potential (LFP) signals recorded from the subthalamic nuclei (STN) has applications in developing the next generation of deep brain stimulation (DBS) systems. DBS therapy is often used for patients with Parkinson's disease (PD) when medication cannot effectively tackle patients' motor symptoms. A DBS system capable of adaptively adjusting its parameters based on patients' activities may optimize therapy while reducing stimulation side effects and improving battery life. METHOD: STN-LFP reveals motor and language behavior, making it a reliable source for behavior classification. This paper presents LFP-Net, an automated machine learning framework based on deep convolutional neural networks (CNN) for classification of human behavior using the time-frequency representation of STN-LFPs within the beta frequency range. The CNNs learn different features based on the beta power patterns associated with different behaviors. The features extracted by the CNNs are passed through fully connected layers and then to the softmax layer for classification. RESULTS: Our experiments on ten PD patients performing three behavioral tasks, "button press", "target reaching", and "speech", show that the proposed approach obtains an average classification accuracy of ~88%. COMPARISON WITH EXISTING METHODS: The proposed method outperforms other state-of-the-art classification methods based on STN-LFP signals. Compared to well-known deep neural networks such as AlexNet, our approach gives higher accuracy using significantly fewer parameters. CONCLUSIONS: CNNs show high performance in decoding the brain's neural response, which is crucial in designing automatic brain-computer interfaces and closed-loop systems.
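LFP-Net operates on time-frequency representations of the LFP restricted to the beta band. The scipy sketch below shows the generic first step of such a pipeline, extracting time-resolved beta-band power from a raw trace; the window length and band edges are illustrative defaults, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import spectrogram

def beta_band_power(lfp, fs, band=(13.0, 30.0)):
    """Time-resolved beta-band power from a 1-D LFP trace.

    Returns (frame_times, mean_power_in_band_per_frame).
    """
    f, t, sxx = spectrogram(lfp, fs=fs, nperseg=int(fs))  # ~1 s analysis windows
    mask = (f >= band[0]) & (f <= band[1])
    return t, sxx[mask].mean(axis=0)
```

A pure 20 Hz tone (inside the beta band) produces far more band power under this measure than a 5 Hz tone, which is the kind of contrast a downstream classifier learns from.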


Subjects
Deep Brain Stimulation, Deep Learning, Parkinson Disease, Subthalamic Nucleus, Humans, Parkinson Disease/therapy, Speech
11.
Dev Sci ; 23(2): e12886, 2020 03.
Article in English | MEDLINE | ID: mdl-31271685

ABSTRACT

Gaze is an emergent visual feature. A person's gaze direction is perceived not just based on the rotation of their eyes, but also their head. At least among adults, this integrative process appears to be flexible, such that one feature can be weighted more heavily than the other depending on the circumstances. Yet it is unclear how this weighting might vary across individuals or across development. When children perceive emergent gaze, do they prioritize cues from the head and eyes similarly to adults? Is the perception of gaze among individuals with autism spectrum disorder (ASD) emergent, or is it reliant on a single feature? Sixty adults (M = 29.86 years of age), thirty-seven typically developing children and adolescents (M = 9.3 years of age; range = 7-15), and eighteen children with ASD (M = 9.72 years of age; range = 7-15) viewed faces with leftward, rightward, or direct head rotations in conjunction with leftward or rightward pupil rotations, and then indicated whether the face was looking leftward or rightward. All individuals, across development and ASD status, used head rotation to infer gaze direction, albeit with some individual differences. However, the use of pupil rotation was heavily dependent on age. Finally, children with ASD used pupil rotation significantly less than typically developing (TD) children when inferring gaze direction, even after accounting for age. Our approach provides a novel framework for understanding individual and group differences in gaze as it is actually perceived: as an emergent feature. Furthermore, this study begins to address an important gap in the ASD literature, taking a first look at emergent gaze perception in this population.


Assuntos
Fixação Ocular/fisiologia , Percepção Visual/fisiologia , Adolescente , Adulto , Transtorno do Espectro Autista/fisiopatologia , Criança , Sinais (Psicologia) , Face , Feminino , Humanos , Masculino , Pupila , Rotação
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 4720-4723, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441403

ABSTRACT

This paper presents the results of our recent work on studying the effects of deep brain stimulation (DBS) and medication on the dynamics of brain local field potential (LFP) signals used for behavior analysis of patients with Parkinson's disease (PD). DBS is a technique used to alleviate the severe symptoms of PD when pharmacotherapy is not very effective. Behavior recognition from LFP signals recorded from the subthalamic nucleus (STN) has application in developing closed-loop DBS systems, where the stimulation pulse is adaptively generated according to the subject's ongoing behavior. Most existing studies on behavior recognition that use STN-LFPs assume the DBS is "off". This paper examines how the performance and accuracy of automated behavior recognition from LFP signals are affected under different stimulation on/off paradigms. We first study beta power suppression in LFP signals under different scenarios (stimulation on/off and medication on/off). Afterward, we explore the accuracy of support vector machines in predicting human actions ("button press" and "reach") using the spectrogram of STN-LFP signals. Our experiments on the recorded LFP signals of three subjects confirm that beta power is suppressed significantly when patients take medication (p-value < 0.002) or receive stimulation (p-value < 0.0003). The results also show that we can classify different behaviors with a reasonable accuracy of 85%, even when high-amplitude stimulation is applied.
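A beta-suppression comparison of the kind reported here can be sketched as a paired test on per-trial band power across two states. The numbers below are invented purely for illustration; the paper's data and exact statistical procedure may differ:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-trial beta power for one subject, stimulation off vs. on.
beta_off = np.array([4.1, 3.8, 4.5, 4.0, 4.3, 3.9])
beta_on = np.array([2.2, 2.0, 2.6, 2.1, 2.4, 1.9])

# Paired test: the same trials measured in two stimulation states.
t_stat, p_value = ttest_rel(beta_off, beta_on)
suppressed = (beta_on.mean() < beta_off.mean()) and (p_value < 0.05)
```

With these toy values the test reports significant suppression; on real LFP data the band power would come from a spectrogram step like the one sketched for the previous record.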


Subjects
Deep Brain Stimulation, Parkinson Disease, Subthalamic Nucleus, Humans, Support Vector Machine
13.
IEEE Trans Neural Syst Rehabil Eng ; 26(1): 216-223, 2018 01.
Article in English | MEDLINE | ID: mdl-28945597

ABSTRACT

Deep brain stimulation (DBS) provides significant therapeutic benefit for movement disorders such as Parkinson's disease (PD). Current DBS devices lack real-time feedback (and are thus open loop), and stimulation parameters are adjusted during scheduled visits with a clinician. A closed-loop DBS system may reduce power consumption and side effects by adjusting stimulation parameters based on the patient's behavior. The subthalamic nucleus (STN) local field potential (LFP) is a strong candidate signal for neural feedback, because it can be recorded from the stimulation lead and does not require additional sensors. In this paper, we introduce a behavior detection method capable of asynchronously detecting the finger movements of PD patients. Our study indicates that there is motor-modulated inter-hemispheric connectivity between LFP signals recorded bilaterally from the STN. We utilize a non-linear regression method to measure this inter-hemispheric connectivity for detecting finger movement. Our experimental results, using recordings from 11 patients with PD, demonstrate that this approach is applicable for behavior detection in the majority of subjects (average area under the curve of 70 ± 12%).
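Detector performance here is summarized by the area under the ROC curve (AUC). A self-contained numpy implementation of the standard rank-sum (Mann-Whitney) identity for AUC, included for illustration rather than taken from the paper:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum identity.

    labels: 0/1 ground truth; scores: detector outputs (higher = more positive).
    AUC equals the fraction of (positive, negative) pairs the detector ranks
    correctly, counting ties as half.
    """
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, so the reported 70 ± 12% sits between the two.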


Assuntos
Encéfalo/fisiologia , Estimulação Encefálica Profunda/métodos , Movimento , Núcleo Subtalâmico/fisiopatologia , Idoso , Algoritmos , Potenciais Evocados , Retroalimentação , Feminino , Dedos/fisiologia , Lateralidade Funcional , Humanos , Masculino , Pessoa de Meia-Idade , Vias Neurais , Dinâmica não Linear , Doença de Parkinson/reabilitação , Curva ROC , Núcleo Subtalâmico/anatomia & histologia
14.
J Neurosci Methods ; 293: 254-263, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-29017898

ABSTRACT

BACKGROUND: Classification of human behavior from brain signals has potential application in developing closed-loop deep brain stimulation (DBS) systems. This paper presents human behavior classification using local field potential (LFP) signals recorded from the subthalamic nuclei (STN). METHOD: A hierarchical classification structure is developed to perform behavior classification from LFP signals through a multi-level (coarse-to-fine) framework. At each level, the time-frequency representations of all six signals from the DBS leads are combined through an MKL-based SVM classifier to classify five tasks (speech, finger movement, mouth movement, arm movement, and random segments). To lower the computational cost, we alternatively use the inter-hemispheric synchronization of the LFPs to make three pairs out of the six bipolar signals. Three classifiers are separately trained at each level of the hierarchical approach, yielding three labels. A fusion function is then developed to combine these three labels and determine the label of the corresponding trial. RESULTS: Using all six LFPs with the proposed hierarchical approach improves classification performance. Moreover, the synchronization-based method reduces the computational burden considerably while classification performance remains relatively unchanged. COMPARISON WITH EXISTING METHODS: Our experiments on two datasets recorded from nine subjects undergoing DBS surgery show that the proposed approaches markedly outperform other methods for behavior classification based on LFP signals. CONCLUSION: The LFP signals acquired from the STN contain useful information for recognizing human behavior. This can be a precursor for designing the next generation of closed-loop DBS systems.


Assuntos
Atividade Motora/fisiologia , Fala/fisiologia , Núcleo Subtalâmico/fisiologia , Máquina de Vetores de Suporte , Análise de Ondaletas , Idoso , Sincronização Cortical , Estimulação Encefálica Profunda/métodos , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Boca/fisiologia , Análise Multinível , Doença de Parkinson/fisiopatologia , Núcleo Subtalâmico/fisiopatologia , Extremidade Superior/fisiologia
15.
Brain Sci ; 6(4)2016 Nov 29.
Article in English | MEDLINE | ID: mdl-27916831

ABSTRACT

Subthalamic nucleus (STN) local field potentials (LFP) are neural signals that have been shown to reveal motor and language behavior, as well as pathological parkinsonian states. We use a research-grade implantable neurostimulator (INS) with data collection capabilities to record STN-LFP outside the operating room, to determine the reliability of the signals over time, and to assess their dynamics with respect to behavior and dopaminergic medication. Seven subjects were implanted with the recording-augmented deep brain stimulation (DBS) system, and bilateral STN-LFP recordings were collected in the clinic over twelve months. Subjects were cued to perform voluntary motor and language behaviors in on- and off-medication states. The STN-LFP recorded with the INS demonstrated behavior-modulated desynchronization of beta frequency (13-30 Hz) and synchronization of low gamma frequency (35-70 Hz) oscillations. Dopaminergic medication did not diminish the relative beta frequency oscillatory desynchronization with movement. However, movement-related gamma frequency oscillatory synchronization was only observed in the medication-on state. We observed significant inter-subject variability, but consistent STN-LFP activity across recording systems and over a one-year period for each subject. These findings demonstrate that an INS system can provide robust STN-LFP recordings in ambulatory patients, allowing these signals to be recorded in settings that better represent the natural environments and range of medication states patients experience.

16.
J Med Imaging (Bellingham) ; 3(4): 044501, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27872871

ABSTRACT

Cancer is the second leading cause of death in the US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians in efficiently diagnosing cancers at early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and, recently, histograms of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability, since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and thereby handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using CNNs trained on images along with the magnitude and phase of shearlet coefficients is presented. In particular, we apply the shearlet transform to images and extract the magnitude and phase of the shearlet coefficients. We then feed the shearlet features along with the original images to our CNN, consisting of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information improves the detection accuracy and generalizes better compared to state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, a difficult domain given the limited medical data available for such analysis.

17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 1030-1033, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268500

ABSTRACT

Deep Brain Stimulation (DBS) has gained increasing attention as an effective method to mitigate Parkinson's disease (PD) disorders. Existing DBS systems are open loop in that the system parameters are not adjusted automatically based on the patient's behavior. Classification of human behavior is an important step in the design of the next generation of DBS systems, which are closed loop. This paper presents a classification approach to recognize such behavioral tasks using subthalamic nucleus (STN) local field potential (LFP) signals. In our approach, we use the time-frequency representation (spectrogram) of the raw LFP signals recorded from the left and right STNs as the feature vectors. These features are then combined via support vector machines (SVM) with a multiple kernel learning (MKL) formulation. The MKL-based classification method is utilized to classify different tasks: button press, mouth movement, speech, and arm movement. Our experiments show that the lp-norm MKL significantly outperforms single-kernel SVM-based classifiers in classifying the behavioral tasks of five subjects, even using signals acquired at a low sampling rate of 10 Hz, which leads to a lower computational cost.


Assuntos
Algoritmos , Estimulação Encefálica Profunda/métodos , Monitorização Fisiológica/métodos , Núcleo Subtalâmico/fisiologia , Braço/fisiopatologia , Feminino , Humanos , Masculino , Movimento/fisiologia , Doença de Parkinson/terapia , Fala/fisiologia , Máquina de Vetores de Suporte
18.
IEEE Trans Cybern ; 46(3): 817-26, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25861093

ABSTRACT

Automatic measurement of spontaneous facial action units (AUs) defined by the facial action coding system (FACS) is a challenging problem. The recent FACS user manual defines 33 AUs to describe different facial activities and expressions. In spontaneous facial expressions, only a subset of AUs is typically activated at any one time. Given that AUs occur sparsely over time, we propose a novel method to detect the absence and presence of AUs and estimate their intensity levels via sparse representation (SR). We use robust principal component analysis to decompose expression from facial identity and then estimate the intensity of multiple AUs jointly using a regression model formulated based on dictionary learning and SR. Our experiments on the Denver Intensity of Spontaneous Facial Action and UNBC-McMaster Shoulder Pain Expression Archive databases show that our method is a promising approach for the measurement of spontaneous facial AUs.
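Sparse representation over a dictionary, as used here for AU intensity estimation, is commonly computed with iterative shrinkage-thresholding. Below is a minimal ISTA sketch in numpy, assuming an l1-regularized least-squares objective; the dictionary and parameters are illustrative, not the paper's learned dictionary:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, y, lam=0.1, n_iter=200):
    """ISTA: find a sparse x with y ~ D @ x (columns of D are dictionary atoms),
    minimizing 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (D.T @ (D @ x - y)) / L, lam / L)
    return x
```

With an identity dictionary the solution is simply the soft-thresholded observation, which makes the shrinkage behavior easy to verify before moving to a learned dictionary.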


Assuntos
Identificação Biométrica/métodos , Face/anatomia & histologia , Face/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Aprendizado de Máquina , Adulto , Algoritmos , Face/fisiologia , Feminino , Humanos , Masculino , Análise de Componente Principal
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 5553-6, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26737550

ABSTRACT

Deep Brain Stimulation (DBS) provides significant therapeutic benefit for movement disorders such as Parkinson's disease. Current DBS devices lack real-time feedback (and are thus open loop), and stimulation parameters are adjusted during scheduled visits with a clinician. A closed-loop DBS system may reduce power consumption and DBS side effects. In such systems, DBS parameters are adjusted based on the patient's behavior, which makes behavior detection a major step in their design. Various physiological signals can be used to recognize behaviors. The subthalamic nucleus (STN) local field potential (LFP) is a strong candidate signal for neural feedback, because it can be recorded from the stimulation lead and does not require additional sensors. A practical behavior detection method should detect behaviors asynchronously, meaning it should not use any prior knowledge of behavior onsets. In this paper, we introduce a behavior detection method that asynchronously detects the finger movements of Parkinson's patients. As a result of this study, we learned that there is motor-modulated inter-hemispheric connectivity between LFP signals recorded bilaterally from the STN. We used a non-linear regression method to measure this connectivity and used it to detect finger movements. The performance of this method is evaluated using the receiver operating characteristic (ROC) curve.


Assuntos
Núcleo Subtalâmico , Estimulação Encefálica Profunda , Dedos , Humanos , Movimento , Doença de Parkinson
20.
Image Vis Comput ; 32(10): 641-647, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25378765

ABSTRACT

The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.
